Patent abstract:
The invention relates to a method for searching images similar to a request image (Ir) in an image collection, the method exploiting a representation of the request image by a feature vector associating a weight with each of the characteristics, and comprising an interrogation step (LTU) of an inverted index (II) matching each of the characteristics (C1-C5) with images of the collection (I6-I8, I1-I2), characterized in that the step of querying the inverted index comprises an operation of integrating into a list one or more images (I6-I8) of the collection mapped in the inverted index with a first characteristic (C3) selected according to the weight associated with it in the vector representing the request image, the list integration operation being repeated for another characteristic (C1) selected according to the weight associated with it in the vector representing the request image until the number of images of the collection in the list has reached a target number.
Publication number: FR3041794A1
Application number: FR1559289
Filing date: 2015-09-30
Publication date: 2017-03-31
Inventors: Adrian Popescu; Hervé Le Borgne; Alexandru Lucian Ginsca; Etienne Gadeski
Applicant: Commissariat à l'Energie Atomique (CEA); Commissariat à l'Energie Atomique et aux Energies Alternatives (CEA)
IPC main class:
Patent description:

METHOD AND SYSTEM FOR SEARCHING SIMILAR IMAGES QUASI-INDEPENDENTLY OF
THE SCALE OF THE IMAGE COLLECTION
DESCRIPTION
TECHNICAL FIELD
The field of the invention is that of data mining, and more particularly that of content-based image search, in which it is desired to find images similar to a purely visual request in the form of an image called a request image.
STATE OF THE PRIOR ART
In the absence of textual annotations, the search for images can be performed by means of query images that are used to find similar images within a reference image collection.
This visual similarity search process consists of two main phases: indexing the image collection, which is performed offline, and querying, which must be done online. The purpose of indexing is to transform the "pixel" content of images into feature-based representations (feature extraction), usually of fixed size. The purpose of the querying step is to extract a vector representation of the content of the query image and to compare it to the representations of the images in the collection in order to find the most similar elements.
Vector representations of visual features include: representations that aggregate local descriptors within a fixed-size vector (e.g. bags of visual words, Fisher vectors, convolutional neural networks, etc.); representations that encode global characteristics (e.g. color histograms, texture descriptors, etc.); and semantic representations, obtained by the aggregation of intermediate classifiers, which give probabilities of appearance of individual concepts in the image.
A major problem in searching for images by similarity is the speed of the search, which must be done "online". This problem becomes central when processing large-scale collections (i.e. billions of images). There are three main families of solutions for accelerating the similarity search process: reducing the size of vector representations by using techniques such as principal component analysis, linear discriminant analysis, vector quantization, and so on; the use of search trees (kd-trees, k-means trees, forests of decision trees) that work by partitioning the search space defined by the vectors representing the images and thus accelerate the image search process; and the inverted file representation, which is inspired by the search for textual documents and is effective if the vectors representing the images of the collection are sparse (parsimonious). This type of structure associates a set of documents with each dimension of the representation space and, given the sparse character of the representations, the similar documents are found more efficiently by comparing only the non-zero dimensions of the vector representing the query document with the collection documents associated with these dimensions.
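As a rough illustration of this prior-art inverted file structure, the sketch below scores documents by restricting the dot product to the non-zero query dimensions. It is a hedged Python example: the sparse-vector layout, identifiers and data are hypothetical, not taken from the patent.

```python
from collections import defaultdict

# Hypothetical sparse vectors: {dimension_id: weight}, zero weights omitted.
collection = {
    "img_a": {0: 0.9, 3: 0.2},
    "img_b": {1: 0.8, 3: 0.5},
    "img_c": {0: 0.4, 2: 0.7},
}

# Inverted file: each dimension maps to the documents where it is non-zero.
inverted_file = defaultdict(list)
for doc_id, vec in collection.items():
    for dim, weight in vec.items():
        inverted_file[dim].append((doc_id, weight))

def query(sparse_query):
    """Dot-product scores accumulated only over the non-zero query dimensions."""
    scores = defaultdict(float)
    for dim, q_weight in sparse_query.items():
        for doc_id, d_weight in inverted_file.get(dim, []):
            scores[doc_id] += q_weight * d_weight
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Note that even in this accelerated form, a multiplication and an addition are still performed per posting, which is the arithmetic cost the invention seeks to avoid.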
Despite their improved efficiency compared to exhaustive comparisons of the representative vectors, the use of these accelerated search methods still requires a set of mathematical operations to compute similarities between the vector representing the request image and the vectors representing the images of the collection. The search for similar images therefore remains complex, and this complexity increases with the size of the collection.
DISCLOSURE OF THE INVENTION
The invention aims at a content-based image search technique that is simpler to implement without losing relevance, and that can be applied to very large reference collections without the search time becoming exorbitant. For this purpose, the invention proposes a method for searching images similar to a request image in an image collection, the method exploiting a representation of the request image by a feature vector associating a weight with each of the features, and comprising a step of querying an inverted index matching each feature with images of the collection. The inverted index interrogation step comprises an operation of integrating into a list one or more images of the collection mapped in the inverted index with a first feature selected according to the weight associated with it in the vector representing the request image. The list integration operation is repeated for another feature selected according to the weight associated with it in the vector representing the request image as long as the number of images of the collection included in the list has not reached a target number.
Some preferred but nonlimiting aspects of this method are as follows: the inverted index interrogation step starts with a list integration operation having as first feature the characteristic of highest weight in the vector representing the request image, and continues until the number of images in the list has reached the target number by repeating the list integration operation with, as other feature, the characteristic of immediately lower weight in the vector representing the request image; the list integration operation is performed so as to integrate an image of the collection matched with a feature in the inverted index only if said image is not already included in the list; the method comprises a step of determining, from the target number of images in the list, and for each feature, a maximum number of images that can be included in the list among the images mapped to said feature in the inverted index; the method comprises a prior step of indexing the image collection, comprising: for each image of the collection, extracting features of the image to represent the image in the form of a feature vector associating a weight with each of the features; for each feature, ordering the images of the collection according to their weight associated with the feature to create a list of images ordered by descending weights; and creating the inverted index by matching each feature with a predefined number of images of the collection corresponding to the first images in the ordered image list associated with the feature.
the features are features relating to the presence of visual concepts in an image, the vector representing an image having, as weight associated with each of the features, a probability of appearance of a visual concept in the image; the method further comprises a step of ranking the images integrated in the list, said ranking step comprising, for each of the images included in the list, a measure of similarity with the request image; the similarity measure of an image integrated in the list with the request image comprises a comparison of low-level (respectively high-level) characteristics extracted from the request image with low-level (respectively high-level) characteristics extracted from the image in the list; the method comprises a step of reformulating the vector representing the request image consisting of modifying the weight associated with one or more features that can be confused with other features. The invention also relates to a computer program product comprising program code instructions for performing the steps of the method when said program is executed on a computer. It also extends to a system for searching images similar to a request image in an image collection, configured so as to implement the method according to the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
Other aspects, objects, advantages and features of the invention will appear more clearly on reading the following detailed description of preferred embodiments thereof, given by way of non-limiting example, and made with reference to the appended Figure 1, which illustrates the overall scheme of a possible embodiment of the method according to the invention.
DETAILED DESCRIPTION OF PARTICULAR EMBODIMENTS
The invention relates to a method of searching for documents among the documents of a collection by means of a representation of a request and of the documents of the collection by a feature vector associating a weight with each of the features. In the following, we will take the example of a collection of images, without this being in any way limiting, the invention being aimed at any type of multimedia document and being applicable whenever a feature-vector representation of the multimedia documents is available. The invention thus notably, but not exclusively, relates to the search for images similar to a request image among a collection of images which generally comprises thousands, or even millions, of images. In particular, the method is intended to create a list of images of the collection similar to the request image, the number of similar images corresponding to a predetermined target number x. It exploits a representation of the request image by a feature vector associating a weight with each of the features, and includes a step of querying an inverted index mapping each of the features to images of the collection.
The process is divided into two main phases: a first, so-called indexing phase, generally performed "offline", and a second, querying phase, generally performed "online", that is to say in real time during the actual search for similar images.
FIG. 1 shows an overall diagram of the method according to the invention. In this figure, solid lines illustrate the steps carried out "offline" while dashed lines illustrate the steps carried out "online". In this same figure, the data and the results of the processing are represented in rectangles with rounded corners, the different data-processing steps being presented in rectangles. The steps and data of the offline indexing phase HL have also been separated from the steps and data of the online search phase EL.
Each of the first and second phases HL, EL includes a feature extraction step "EX-CR" applied to an image in order to represent it in the form of a feature vector associating a weight with each characteristic of a set of image characteristics.
During the HL indexing phase, the EX-CR feature extraction is applied to all the images of the collection stored in a database BdB. During the online search phase EL, the EX-CR feature extraction is applied to the request image Ir. The images of the collection and the request image are thus described by vectors of the same nature.
In one possible embodiment of the invention, the EX-CR feature extraction of an image comprises a low-level feature extraction "EX-BN", which associates a fixed-size vector with the image, followed by an extraction of high-level features "EX-S" from the low-level features. Low-level features are typically not interpretable, whereas high-level features are generally understandable to humans.
The low-level characteristics are, for example, bags of visual words (BoVW, for Bag of Visual Words), histograms of oriented gradients (HOG), Fisher kernels, the fully connected (so-called "classification") layers of convolutional neural networks, etc.
These low-level characteristics can be stored in a direct index ID associating with each of the images in the collection It, Ip, Iq the fixed size vector resulting from the extraction of low-level characteristics of the image.
The high-level characteristics are, for example, visual characteristics making it possible to form a semantic representation of the image.
It can be an intermediate semantic representation (the characteristics being, for example, the outputs of the final layer of a convolutional network) or a semantic representation proper (the characteristics then relate to the presence of visual concepts in the image, the vector representing an image having, as weight associated with each of the characteristics, a probability of appearance of a visual concept in the image). Such a semantic representation is typically obtained by aggregating the outputs of a bank of visual classifiers that provide probabilities of occurrence of individual concepts in the image. It makes it possible to search for images similar to a query formulated with textual concepts of the representation space instead of query images.
It should be noted that when the reference collection includes images of a specific domain, it is possible to adapt the representation space by eliminating features that are not relevant in that context.
After extracting the characteristics of an image, we have a compact representation of the image in the form of a fixed-size vector that can be written as D = {(v1, p1), (v2, p2), ..., (vn, pn)}, where the vi are the dimensions of the representation vector space and the pi are the weights associated with these dimensions for the considered image. The vi can thus represent a set of visual concepts, pi being the probability of presence of the visual concept vi in the image.
Under the intuition that only a small number of visual concepts are recognizable in an image, and should therefore be active in the vector representing it, we can seek a parsimonious (or "sparse") representation of the image comprising a reduced number of non-zero dimensions. To do this, the vector D representing an image is modified so that only a small subset of k weights pi remain non-zero. Typically k < 10, and the vector representing an image is rewritten as:
Dk = {(v1, p1), (v2, p2), ..., (vn, pn)}, where all weights pi other than the k largest are set to zero.
This parsimonious representation makes it possible to encode a large amount of information on a small number of dimensions, and makes indexing with an inverted file more efficient, as demonstrated in the article by A. Ginsca, A. Popescu, H. Le Borgne, N. Ballas, P. Vo, and I. Kanellos entitled "Large-Scale Image Mining with Flickr Groups", in Proc. of Multimedia Modeling Conf., 2015.
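The top-k sparsification described above might be sketched as follows. This is an illustrative Python fragment only; the function name and the positional tie-breaking rule are assumptions, not specified by the patent.

```python
def sparsify(weights, k):
    """Keep only the k largest weights of a dense vector; zero out the rest."""
    if k >= len(weights):
        return list(weights)
    # Weight of the k-th largest entry: anything strictly below it is dropped.
    threshold = sorted(weights, reverse=True)[k - 1]
    kept = 0
    out = []
    for p in weights:
        # Keep at most k weights >= threshold (ties broken by position).
        if p >= threshold and kept < k:
            out.append(p)
            kept += 1
        else:
            out.append(0.0)
    return out
```

For example, with k = 2, a vector such as [0.1, 0.8, 0.3, 0.9, 0.05] keeps only its two strongest dimensions.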
The preliminary offline indexing phase HL comprises, as seen previously, for each image of the collection, the EX-CR extraction of characteristics of the image to represent it in the form of a vector associating a weight with each of the image characteristics. It then comprises the creation "CREA-II" of an inverted index II matching each of the characteristics with a predefined number of images of the collection. By retaining a predefined number of images associated with each of the features, the memory footprint of the inverted index can be limited.
This predefined number can be identical for all the characteristics or, on the contrary, specific to each characteristic. It can be arbitrary (for example, at most 1000 images are retained per characteristic) or can be derived from the target number x of images in the list of similar images by determining, for each characteristic, a maximum number of images that can be included in the list. This maximum number of images may or may not be the same for each of the features.
In a possible embodiment making it possible to maximize the relevance of the results, the EX-CR feature extraction is followed by an ordering operation in which, for each of the characteristics, the images of the collection are arranged according to their weight associated with the feature, to create a list of images ordered by descending weights. Then a creation operation "CREA-II" of the inverted index II is performed that matches each of the characteristics with a predefined number of images of the collection corresponding to the first images in the ordered image list associated with the feature. In the inverted index II, we thus find xi images associated with the characteristic vi, these xi images having a non-zero weight pi associated with the characteristic in the vectors representing them. This predefined number xi can in particular, but not necessarily, correspond to the maximum number of images that can be integrated in the list of similar images, determined according to the target number x of images in the list.
In the example of FIG. 1, the inverted index II thus matches: the characteristic C1 with the images I1 and I2 of the reference collection, whose weights associated with this characteristic are respectively 0.9 and 0.8; the characteristic C2 with the images I3, I4 and I5 of the reference collection, whose weights associated with this characteristic are respectively 0.8, 0.7 and 0.6; the characteristic C3 with the images I6, I7 and I8 of the reference collection, whose weights associated with this characteristic are respectively 0.9, 0.8 and 0.6.
It will be remembered that, depending on the frequency of occurrence of the characteristic vi in the collection, the number xi of images of the collection associated with this characteristic may be less than the target number x of images in the list of similar images.
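Combining the ordering and truncation steps above, the construction of the inverted index II could look like the following sketch. The data structures are hypothetical Python choices; the default cap of 1000 entries per feature simply mirrors the arbitrary example given earlier.

```python
def build_inverted_index(collection, images_per_feature):
    """Build an inverted index from sparse image vectors.

    collection: {image_id: {feature_id: weight}} sparse vectors.
    images_per_feature: optional per-feature cap x_i on posting-list length.
    For each feature, images are ordered by descending weight and the list
    is truncated to the predefined number of entries.
    """
    index = {}
    for image_id, vec in collection.items():
        for feature, weight in vec.items():
            index.setdefault(feature, []).append((image_id, weight))
    for feature, postings in index.items():
        postings.sort(key=lambda entry: entry[1], reverse=True)
        # Keep only the first x_i images (default cap: 1000, as in the
        # arbitrary example above).
        del postings[images_per_feature.get(feature, 1000):]
    return index
```

With the Figure 1 values, feature C1 would map to I1 (0.9) then I2 (0.8), and C3 to I6, I7, I8 by descending weight.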
The online search phase EL comprises, as seen above, the EX-CR extraction of characteristics from the request image Ir to represent it in the form of a vector of the same type as those representing the images of the reference collection.
In one possible embodiment of the invention, the online search phase comprises a reformulation step "CONF" of the vector representing the request image, consisting of modifying, for example increasing, the weight associated with one or more characteristics that can be confused with one or more characteristics selected according to the weight associated with them in the vector representing the request image (typically, the highest-weight characteristics are selected).
This reformulation step can exploit a confusion matrix which captures, for each characteristic, the probability that it is confused with the other characteristics. This matrix is computed on a training set (which can be independent of the collection) whose ground truth is given by textual annotations of the target characteristics vi. Given an image annotated with vi, we consider that this dimension is confused with vj if the probability associated with the characteristic vj is greater than that associated with the characteristic vi. This confusion is averaged over all the training images of the target characteristic vi to form the confusion matrix. This matrix thus encodes global dependency relations between the characteristics, obtained by the aggregation of all the training images for each dimension vi.
Such a confusion matrix is generally used to analyze classification defects. In the context of the invention, a positive role is given to confusions, and the confusion matrix is exploited in order to diversify the representation of the request image by considering not only the characteristics associated with the highest probabilities in the vector representing the request image, but also a set of characteristics with which these highest-probability characteristics are likely to be confused.
In a variant of this embodiment of reformulation of the vector representing the request image, a merging operation is furthermore carried out between the initial vector (resulting from the EX-CR feature extraction) and the vector reformulated by means of the confusion matrix. This fusion can be implemented, for example, by successively choosing dimensions included in each of the two vector representations. The usefulness of the fusion comes from the fact that the initial vector encodes a representation specific to the image, whereas the reformulated vector encodes a representation based on more generic relations between the dimensions of the vector.
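One plausible way to implement this confusion-based reformulation is sketched below. The patent does not fix a precise formula, so the boost scheme, the number of boosted dimensions and all parameter names are assumptions made for illustration.

```python
import numpy as np

def reformulate(query_vec, confusion, top=2, boost=0.5):
    """Hypothetical reformulation of a query vector via a confusion matrix.

    For each of the `top` strongest dimensions of the query vector, a
    fraction `boost` of its weight is redistributed to the dimensions it
    is most often confused with (row i of `confusion` holds the average
    confusion of dimension i with every other dimension).
    """
    out = query_vec.copy()
    strongest = np.argsort(query_vec)[::-1][:top]
    for i in strongest:
        out += boost * query_vec[i] * confusion[i]
    return out
```

The fusion variant could then interleave dimensions taken alternately from `query_vec` and from the reformulated vector.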
In what follows, the same term "vector representing the request image" will be used to designate the initial vector as well as the reformulated vector or the vector resulting from the fusion.
An example of a vector representing the request image is given in FIG. 1, after reordering the characteristics according to their weight. This vector thus indicates, for a first characteristic C3, a weight of 0.80; for a second characteristic C1, a weight of 0.79; for a third characteristic C4, a weight of 0.76; for a fourth characteristic C2, a weight of 0.74; etc.
The search phase continues with an interrogation step LTU of the inverted index II to create a list L of images of the collection (I6-I8, I1, I2) similar to the request image Ir. This list contains a number of similar images corresponding to a predetermined target number x (x = 5 in the example of Figure 1). The LTU interrogation step of the inverted index more particularly comprises an operation of integrating into the list one or more images I6-I8 of the collection mapped in the inverted index II with a first characteristic C3 selected according to the weight associated with it in the vector representing the request image, the list integration operation being repeated for another characteristic C1, selected according to the weight associated with it in the vector representing the request image, as long as the number of images in the list has not reached the target number x.
This LTU interrogation step only involves an iteration over the dimensions vi of the vector representing the request image until the x requested similar images have been found. This form of interrogation, dependent on the purpose of the search, accelerates the search process compared to state-of-the-art methods.
Preferably, the interrogation step of the inverted index starts with a list integration operation having as first characteristic the characteristic C3 of highest weight in the vector representing the request image, and continues, as long as the number of images included in the list has not reached the target number, by repeating the list integration operation with, as other characteristic, the characteristic of immediately lower weight in the vector representing the request image.
Taking the example of FIG. 1, with a target number x = 5, the interrogation step comprises a first operation of integration into the list of the images I6-I8 associated with the characteristic C3 in the inverted index II, this characteristic being that of highest weight in the vector representing the request image. A second integration operation is then performed to integrate into the list the images I1 and I2 associated with the characteristic C1, which is the characteristic of immediately lower weight in the vector representing the request image.
The list of similar images L is thus obtained by a concatenation of the lists of the inverted index associated with the characteristics vi of highest weight in the vector representing the request image. No arithmetic operation is necessary, apart from the elimination of possible duplicates, a list integration operation actually being performed so as to integrate an image of the collection only if said image is not already included in the list. This process considers each of the characteristics of the vector representing the request image independently (one feature per list integration operation) and is therefore almost independent of the size of the reference collection, which is the case for none of the state-of-the-art querying methods.
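The LTU interrogation step described above reduces to a simple loop, sketched here in Python. Identifiers are illustrative; the example values follow the Figure 1 weights, with images I1-I8 and characteristics C1-C4.

```python
def query_inverted_index(query_vec, inverted_index, target=5):
    """Walk the query features by descending weight and concatenate their
    posting lists (skipping duplicates) until the list of similar images
    reaches the target number. No similarity arithmetic is performed."""
    similar = []
    features = sorted(query_vec, key=query_vec.get, reverse=True)
    for feature in features:
        for image_id, _weight in inverted_index.get(feature, []):
            if image_id not in similar:
                similar.append(image_id)
            if len(similar) == target:
                return similar
    return similar
```

On the Figure 1 example (target x = 5), C3 contributes I6-I8 and C1 then contributes I1 and I2, at which point the loop stops without ever touching C4 or C2.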
In a possible embodiment, the list of similar images L integrates all of the images mapped in the inverted index with a characteristic vi of high weight in the vector representing the request image. As a variant, only a part of the images mapped in the inverted index with a characteristic vi of high weight in the vector representing the request image is integrated in the list of similar images. This variant can be useful to mitigate the possible negative effects of a bad association of a characteristic vi with the request image, and to avoid overly favoring the integration of images matched with the highest-weight characteristics. It can in particular be implemented when the predefined number of images mapped in the inverted index with a characteristic vi corresponds to the maximum number of images that can be integrated in the list, determined according to the target number x of images in the list of similar images.
In a possible embodiment of the invention shown in FIG. 1, it is possible to reorder the similar images included in the list at the end of the LTU interrogation step of the inverted index II by making a finer comparison of the request image and the images integrated in the list of similar images L. The method may thus comprise a ranking step "RANK" of the images integrated in the list of similar images L, said ranking step comprising, for each of the similar images integrated in the list, a measure of similarity between the request image and the similar image. The images of the list of similar images L are then reordered and integrated into a refined list Lf according to their similarity with the request image.
The computational complexity of this comparison depends solely on the size x of the list of similar images, and an appropriate choice of this size allows real-time access to the refined result list Lf.
In another variant, the ranking step RANK can be applied to a restricted number of images in the list of similar images L. For example, if the ranking is restricted to the first three images in the previous example, the final list Lf could be I7, I8, I6, I1, I2, because only I6, I7 and I8 are reordered.
The similarity measure may in particular be made by exploiting the vector representations of the request image and of the images of the list of similar images, in particular, as shown in FIG. 1, the low-level characteristics extracted from the request image and the low-level characteristics extracted from the images of the list of similar images, which are stored in the direct index ID. In an alternative embodiment, the similarity measure can also be made by exploiting the high-level characteristics of the images (typically semantic representations), in their sparse or complete versions. As illustrative examples, the similarity measure may be a cosine similarity or an L2 Euclidean distance. The invention is not limited to the method as described above but also extends to a computer program product comprising program code instructions for performing the steps of the method when said program is run on a computer. The invention also relates to a system for implementing the method, and in particular to a system for searching images similar to a request image in an image collection using a representation of the request image by a feature vector associating a weight with each of the features, comprising: a database BdB in which the image collection is stored and an inverted index II mapping each feature of a set of image features to images of the collection; and a processor configured to query the inverted index to create a list of images of the collection similar to the request image by performing an operation of integration into the list of one or more images of the collection mapped in the inverted index with a first characteristic selected according to the weight associated with it in the vector representing the request image, and by repeating the list integration operation for another characteristic selected according to the weight associated with it in the vector representing the request image until the number of images in the list has reached a target number.
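As an illustration of this RANK step, a cosine-similarity reranking of the candidate list might look like the following sketch, assuming dense low-level descriptors stored in a direct index keyed by image identifier (the data and identifiers are hypothetical).

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense descriptors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def rerank(query_descriptor, candidates, direct_index):
    """Reorder the x candidate images by cosine similarity between their
    low-level descriptors (looked up in the direct index) and the query's."""
    return sorted(candidates,
                  key=lambda img: cosine(query_descriptor, direct_index[img]),
                  reverse=True)
```

Because only x candidate descriptors are compared, the cost of this step is bounded by the list size rather than by the collection size, which is the point made about real-time access above.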
This system typically comprises a communication interface for receiving data from a user (in particular the request image) and presenting data to a user (in particular the images of the collection included in the list L of images similar to the request image).
Claims (14)
1. A method for searching images similar to a request image (Ir) in an image collection, the method exploiting a representation of the request image by a feature vector associating a weight with each of the characteristics, and comprising a step of interrogation (LTU) of an inverted index (II) matching each of the characteristics (C1-C5) with images of the collection (I1-I8), characterized in that the interrogation step of the inverted index includes an operation of integrating into a list one or more images (I6-I8) of the collection mapped in the inverted index with a first characteristic (C3) selected according to the weight associated with it in the vector representing the request image, the list integration operation being repeated for another characteristic (C1) selected according to the weight associated with it in the vector representing the request image as long as the number of images of the collection included in the list has not reached a target number.
2. The method of claim 1, wherein the step of querying the inverted index begins with a list integration operation having as first feature the characteristic of highest weight (C3) in the vector representing the request image, and continues until the number of images in the list has reached the target number by repeating the list integration operation with, as other feature, the characteristic of immediately lower weight (C1) in the vector representing the request image.
3. The method according to one of claims 1 and 2, wherein the list integration operation is performed so as to integrate an image of the collection matched with a characteristic in the inverted index only if said image is not already included in the list.
4. The method according to one of claims 1 to 3, comprising a step of determining, from the target number of images in the list, and for each characteristic, a maximum number of images that can be included in the list among the images mapped to said characteristic in the inverted index.
The method of claim 4, wherein the maximum number of images that can be included in the list is the same for each of the features.
6. The method according to one of claims 1 to 5, comprising a prior step of indexing the image collection, comprising: for each image of the collection, the extraction of characteristics (EX, EX-BN, EX-S) of the image to represent the image in the form of a feature vector associating a weight with each of the features; for each feature, ordering the images of the collection according to their weight associated with the feature to create a list of images ordered by descending weights; and the creation (CREA-II) of the inverted index (II) by matching each of the characteristics (C1, C2, C3) with a predefined number of images of the collection (I1-I2, I3-I5, I6-I8) corresponding to the first images in the ordered image list associated with the feature.
7. The method of claim 6 taken in combination with one of claims 4 and 5, wherein, for each of the features, the predefined number of images in the inverted index corresponds to the maximum number of images that can be integrated into the list.
8. The method of any one of claims 1 to 7, wherein the characteristics are characteristics relating to the presence of visual concepts (EX-S) in an image, the vector representing an image having, as the weight associated with each of the characteristics, a probability of appearance of a visual concept in the image.
9. The method of any one of claims 1 to 8, further comprising a step of ranking (RANK) the images included in the list (L), said ranking step comprising, for each of the images included in the list, a measurement of similarity to the query image.
10. The method of claim 9, wherein the measurement of similarity of an image included in the list with the query image comprises a comparison of low-level characteristics extracted from the query image and low-level characteristics extracted from the image included in the list.
11. The method of claim 9 in combination with claim 8, wherein the measurement of similarity of an image included in the list with the query image comprises a comparison of the characteristics relating to the presence of visual concepts in the query image and the characteristics relating to the presence of visual concepts in the image included in the list.
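The ranking step of claims 9 and 11 can be sketched as follows, here using cosine similarity between concept-probability vectors as the similarity measure. The choice of cosine similarity and all names are assumptions for illustration; the patent does not prescribe a particular measure beyond the comparisons recited in claims 10 and 11.

```python
# Sketch (assumed measure and names) of the ranking step: each candidate
# from the inverted-index query is compared to the query image by cosine
# similarity between their visual-concept vectors, then sorted.
import math

def cosine_similarity(u, v):
    """Cosine similarity between two sparse vectors given as dicts."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in keys)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def rank_candidates(query_vector, candidate_vectors):
    """candidate_vectors: dict image_id -> concept vector of a listed image.
    Returns image ids sorted by decreasing similarity to the query."""
    return sorted(
        candidate_vectors,
        key=lambda img: cosine_similarity(query_vector, candidate_vectors[img]),
        reverse=True,
    )
```

Because only the candidates in the list (L) are ranked, the cost of this step depends on the target number, not on the size of the collection.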
12. The method of any one of claims 1 to 11, comprising a reformulation step (CONF) of the vector representing the query image, consisting in modifying the weight associated with one or more characteristics liable to be confused with other characteristics.
13. A computer program product comprising program code instructions for performing the steps of the method of any one of claims 1 to 12 when said program is run on a computer.
14. A system for searching for images similar to a query image in a collection of images, using a representation of the query image by a characteristic vector associating a weight with each of the characteristics, the system comprising: a database (BdB) in which are stored the image collection and an inverted index (II) matching each characteristic of a set of image characteristics with images of the collection; and a processor configured to query (LTU) the inverted index in order to create a list of images of the collection similar to the query image, by performing an operation of integrating into the list one or more images of the collection matched in the inverted index with a first characteristic (C3) selected according to the weight associated with it in the vector representing the query image, and by repeating the list integration operation with another characteristic selected according to the weight associated with it in the vector representing the query image as long as the number of images included in the list has not reached a target number.
Similar documents:
Publication number | Publication date | Patent title
US20200012674A1|2020-01-09|System and methods thereof for generation of taxonomies based on an analysis of multimedia content elements
US10831814B2|2020-11-10|System and method for linking multimedia data elements to web pages
US9256668B2|2016-02-09|System and method of detecting common patterns within unstructured data elements retrieved from big data sources
JP6049693B2|2016-12-21|In-video product annotation using web information mining
US20100042646A1|2010-02-18|System and Methods Thereof for Generation of Searchable Structures Respective of Multimedia Data Content
EP3356955A1|2018-08-08|Method and system for searching for similar images that is nearly independent of the scale of the collection of images
Rabbath et al.2012|Analysing facebook features to support event detection for photo-based facebook applications
EP2289009B1|2019-02-27|Improved assistance device for image recognition
FR2940694A1|2010-07-02|METHOD AND SYSTEM FOR CLASSIFYING DATA FROM DATABASE.
WO2016102153A1|2016-06-30|Semantic representation of the content of an image
EP2907079A1|2015-08-19|Method of classifying a multimodal object
WO2013156374A1|2013-10-24|Method for recognizing a visual context of an image and corresponding device
US20160124971A1|2016-05-05|System and method of detecting common patterns within unstructured data elements retrieved from big data sources
US10180942B2|2019-01-15|System and method for generation of concept structures based on sub-concepts
Boros et al.2019|Automatic page classification in a large collection of manuscripts based on the International Image Interoperability Framework
FR2939537A1|2010-06-11|SYSTEM FOR SEARCHING VISUAL INFORMATION
WO2018138423A1|2018-08-02|Automatic detection of frauds in a stream of payment transactions by neural networks integrating contextual information
Ouni et al.2017|Improving the discriminative power of bag of visual words model
Nilsson et al.2019|A comparison of image and object level annotation performance of image recognition cloud services and custom Convolutional Neural Network models
Bianco et al.2013|Quantitative review of local descriptors for visual search
Singh et al.2018|Large Scale Image Retrieval with Locality Sensitive Hashing
Hervé et al.2009|Document description: what works for images should also work for text?
Liu et al.2019|Dynamic Re-ranking with Deep Features Fusion for Person Re-identification
Yousaf et al.2021|Patch-CNN: Deep learning for logo detection and brand recognition
Sakthivelan et al.2018|An Accurate Efficient and Scalable Event Based Video Search Method Using Spectral Clustering
Patent family:
Publication number | Publication date
FR3041794B1|2017-10-27|
US20180276244A1|2018-09-27|
EP3356955A1|2018-08-08|
WO2017055250A1|2017-04-06|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title
WO2011054002A2|2009-11-02|2011-05-05|Microsoft Corporation|Content-based image search|
US8065293B2|2007-10-24|2011-11-22|Microsoft Corporation|Self-compacting pattern indexer: storing, indexing and accessing information in a graph-like data structure|
WO2009081620A1|2007-12-26|2009-07-02|T-Terminology, Ltd.|Dictionary system|
US8429216B2|2008-09-23|2013-04-23|Hewlett-Packard Development Company, L.P.|Generating a hash value from a vector representing a data object|
US9405773B2|2010-03-29|2016-08-02|Ebay Inc.|Searching for more products like a specified product|
GB2487377B|2011-01-18|2018-02-14|Aptina Imaging Corp|Matching interest points|
US20180101540A1|2016-10-10|2018-04-12|Facebook, Inc.|Diversifying Media Search Results on Online Social Networks|
US10515289B2|2017-01-09|2019-12-24|Qualcomm Incorporated|System and method of generating a semantic representation of a target image for an image processing operation|
CN107480282A|2017-08-23|2017-12-15|深圳天珑无线科技有限公司|A kind of method and device of picture searching|
Legal status:
2016-09-28| PLFP| Fee payment|Year of fee payment: 2 |
2017-03-31| PLSC| Search report ready|Effective date: 20170331 |
2017-09-29| PLFP| Fee payment|Year of fee payment: 3 |
2018-09-28| PLFP| Fee payment|Year of fee payment: 4 |
2019-09-30| PLFP| Fee payment|Year of fee payment: 5 |
2020-09-30| PLFP| Fee payment|Year of fee payment: 6 |
2021-09-30| PLFP| Fee payment|Year of fee payment: 7 |
Priority:
Application number | Filing date | Patent title
FR1559289A|FR3041794B1|2015-09-30|2015-09-30|METHOD AND SYSTEM FOR SEARCHING LIKE-INDEPENDENT SIMILAR IMAGES FROM THE PICTURE COLLECTION SCALE|
FR1559289A|FR3041794B1|2015-09-30|2015-09-30|METHOD AND SYSTEM FOR SEARCHING LIKE-INDEPENDENT SIMILAR IMAGES FROM THE PICTURE COLLECTION SCALE|
EP16775629.5A| EP3356955A1|2015-09-30|2016-09-27|Method and system for searching for similar images that is nearly independent of the scale of the collection of images|
PCT/EP2016/072922| WO2017055250A1|2015-09-30|2016-09-27|Method and system for searching for similar images that is nearly independent of the scale of the collection of images|
US15/763,347| US20180276244A1|2015-09-30|2016-09-27|Method and system for searching for similar images that is nearly independent of the scale of the collection of images|